
    Language Constructs for Distributed Real-Time Consistency

    In this paper, we present a model and language constructs for a distributed real-time system with the goal of allowing the structured specification of functional and timing constraints, along with explicit, early error recovery from timing faults. To do this, we draw on ideas from (non-distributed) real-time programming and distributed transaction-based systems [81]. A complete language is not specified; the constructs described are assumed to be embedded in a block-structured procedural host programming language such as C [9] or C++ [10] (our current preliminary implementation is in C). The model consists of resources, processes, and a global scheduler. Resources are abstractions that export operations to processes and specify acceptable concurrency of operations to the scheduler. Processes manipulate resources using the exported operations and specify synchronization and restrictions on concurrency (at the exported-operation level) to the scheduler. Examples of the types of information given to the scheduler are that a set of operations should be performed simultaneously, or that a sequence of operations should be performed without interference by another process. The global scheduler embodies the entity or entities that schedule the CPU, memory, devices and other resources in the system. It performs preemptive scheduling of all resources based on dynamic priorities associated with the processes, preserves restrictions on concurrency stated by resources and processes, and is capable of guaranteeing to processes that they will receive resources during a specified future time interval. The remainder of the paper is structured as follows. In the next section, we present language constructs for the expression of timing constraints, called temporal scopes, and describe resources and processes. Section 3 describes what is required of the global scheduler to support these constructs, and what is entailed in guaranteeing functional consistency. We conclude in Section 4.
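
    As a rough illustration of the temporal-scope idea only (the paper embeds its constructs in C; the names temporal_scope and TimingFault below are hypothetical and not the paper's syntax), one might picture a block of code annotated with a deadline and an explicit handler for timing faults:

        # Hypothetical sketch of a "temporal scope": a block with a relative
        # deadline plus explicit recovery from a timing fault. Unlike the
        # paper's global scheduler, this cannot preempt the block; it only
        # detects the overrun when the block finishes.
        import time
        from contextlib import contextmanager

        class TimingFault(Exception):
            """Raised when a temporal scope misses its deadline."""

        @contextmanager
        def temporal_scope(deadline_s: float):
            start = time.monotonic()
            yield
            overrun = time.monotonic() - start - deadline_s
            if overrun > 0:
                raise TimingFault(f"deadline missed by {overrun:.3f}s")

        # Usage: functional work plus explicit, early recovery from the fault.
        try:
            with temporal_scope(deadline_s=0.010):
                pass  # e.g. read sensor, compute, update actuator
        except TimingFault:
            pass      # recovery action, e.g. fall back to a safe output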

    RTC: Language Support for Real-Time Concurrency

    This paper presents language constructs for the expression of timing and concurrency requirements in distributed real-time programs. Our programming paradigm combines an object-based paradigm for the specification of shared resources and a distributed transaction-based paradigm for the specification of application processes. Resources provide abstract views of shared system entities, such as devices and data structures. Each resource has a state and defines a set of actions that can be invoked by processes to examine or change its state. A resource also specifies scheduling constraints on the execution of its actions to ensure the maintenance of its state's consistency. Processes access resources by invoking actions and express precedence, consistency and timing constraints on action invocations. The implementation of our language constructs with real-time scheduling and locking for concurrency control is also described.
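
    A minimal sketch of the resource side of this paradigm, assuming Python rather than the RTC syntax (class and method names are illustrative): a shared buffer exposes read and write actions and constrains their concurrency so that reads may overlap while a write is exclusive, which is the kind of consistency constraint a resource would hand to the scheduler.

        # Illustrative resource: concurrent "read" actions, exclusive "write".
        # A real RTC resource states such constraints declaratively to the
        # scheduler; here they are enforced with a small reader/writer lock.
        import threading

        class BufferResource:
            def __init__(self):
                self._value = 0
                self._lock = threading.Lock()
                self._no_readers = threading.Condition(self._lock)
                self._readers = 0

            def read(self) -> int:          # compatible with other reads
                with self._lock:
                    self._readers += 1
                try:
                    return self._value
                finally:
                    with self._lock:
                        self._readers -= 1
                        self._no_readers.notify_all()

            def write(self, value: int) -> None:   # needs exclusive access
                with self._no_readers:             # note: writers may starve;
                    while self._readers > 0:       # the paper's priority-based
                        self._no_readers.wait()    # scheduling avoids this
                    self._value = value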

    Timed Atomic Commitment

    In a large class of hard-real-time control applications, components execute concurrently on distributed nodes and must coordinate, under timing constraints, to perform the control task. As such, they perform a type of atomic commitment. Traditional atomic commitment differs, however, because there are no timing constraints; agreement is eventual. We therefore define timed atomic commitment (TAC), which requires the processes to be functionally consistent but allows the outcome to include an exceptional state, indicating that timing constraints have been violated. We then present centralized and decentralized protocols to implement TAC and a high-level language construct that facilitates its use in distributed real-time programming.
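
    A hedged sketch of the centralized flavour of this idea (the queue-based transport and names below are illustrative, not the paper's protocol messages): a coordinator collects votes under a deadline and, unlike classical atomic commitment, may decide an exceptional outcome when the timing constraint is violated.

        # Coordinator side of a timed commit: COMMIT if all participants vote
        # yes before the deadline, ABORT on any no vote, EXCEPTION on timeout.
        import queue
        import time
        from enum import Enum

        class Outcome(Enum):
            COMMIT = "commit"
            ABORT = "abort"
            EXCEPTION = "exception"   # timing constraint violated

        def coordinate(votes: "queue.Queue[bool]", n_participants: int,
                       deadline_s: float) -> Outcome:
            deadline = time.monotonic() + deadline_s
            received = 0
            while received < n_participants:
                remaining = deadline - time.monotonic()
                if remaining <= 0:
                    return Outcome.EXCEPTION
                try:
                    vote = votes.get(timeout=remaining)
                except queue.Empty:
                    return Outcome.EXCEPTION
                if not vote:
                    return Outcome.ABORT
                received += 1
            return Outcome.COMMIT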

    Modeling Reliable Distributed Real-Time Programs

    A model for distributed hard real-time programs should incorporate real-time characteristics and be capable of analyzing time-related reliability issues. We introduce a model with these capabilities, called the Real-Time Selection/Resolution (RT-S/R) model, and demonstrate it by example.

    Motivating Time as a First Class Entity

    In hard real-time applications, programs must not only be functionally correct but must also meet timing constraints. Unfortunately, little work has been done to allow the high-level incorporation of timing constraints into distributed real-time programs. Instead, the programmer is required to ensure system timing through a complicated synchronization process or through low-level programming, making it difficult to create and modify programs. In this report, we describe six features that must be integrated into a high-level language and underlying support system in order to promote time to a first-class position in distributed real-time programming systems: expressibility of time, real-time communication, enforcement of timing constraints, fault tolerance to violations of constraints, ensuring distributed system state consistency in the time domain, and static timing verification. For each feature we describe what is required, what related work has been performed, and why this work does not provide adequate capabilities for distributed real-time programming. We then briefly outline an integrated approach that provides these six features using a high-level distributed programming language and system tools such as compilers, operating systems, and timing analyzers to enforce and verify timing constraints.

    Benchmarking real-time distributed object management systems for evolvable and adaptable command and control applications

    This paper describes benchmarking for evolvable and adaptable real-time command and control systems. MITRE's Evolvable Real-Time C3 initiative developed an approach that would enable current real-time systems to evolve into the systems of the future. We designed and implemented an infrastructure and data manager so that various applications could be hosted on the infrastructure. We then completed a follow-on effort to design flexible, adaptable distributed object management systems for command and control (C2) systems. Such an adaptable system would switch scheduling algorithms, policies, and protocols depending on the need and the environment. Both initiatives were carried out for the United States Air Force. One of the key contributions of our work is the investigation of real-time features for distributed object management systems. Partly as a result of our work, we are now seeing various real-time distributed object management products being developed. In selecting a real-time distributed object management system, we need to analyze various criteria; we therefore need benchmarking studies for real-time distributed object management systems. Although benchmarking systems such as Hartstone and Distributed Hartstone have been developed for middleware systems, they were not developed specifically for distributed object-based middleware. Since much of our work is heavily based on distributed objects, we developed benchmarking systems by adapting the Hartstone system. This paper describes our effort on developing these benchmarks. In Section 2 we discuss Distributed Hartstone. In Section 3, we provide background on the original Hartstone and DHartstone designs from SEI (Software Engineering Institute) and CMU (Carnegie Mellon University). We then describe our design and modification of DHartstone to incorporate the capability to benchmark real-time middleware in Section 4. Sections 5 and 6 describe the design of the benchmarking systems. For more details of our work on benchmarking and experimental results we refer to [MAUR98] and [MAUR99]. For background information on our work we refer to
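
    In the spirit of a Hartstone-style periodic workload (a sketch only; invoke below stands in for a remote object call, whereas the paper's DHartstone adaptation measures actual middleware), one way to benchmark an invocation path is to drive it at a fixed period and count missed deadlines:

        # Drive an operation at a fixed period and record latency statistics
        # and the number of invocations that overran their period.
        import time

        def periodic_benchmark(invoke, period_s: float, iterations: int) -> dict:
            misses, latencies = 0, []
            next_release = time.monotonic()
            for _ in range(iterations):
                start = time.monotonic()
                invoke()                              # operation under test
                latency = time.monotonic() - start
                latencies.append(latency)
                if latency > period_s:                # missed its deadline
                    misses += 1
                next_release += period_s
                slack = next_release - time.monotonic()
                if slack > 0:
                    time.sleep(slack)
            return {"missed_deadlines": misses,
                    "max_latency_s": max(latencies),
                    "mean_latency_s": sum(latencies) / len(latencies)}

        # Example: a local no-op stand-in driven at a 10 ms period.
        print(periodic_benchmark(lambda: None, period_s=0.010, iterations=100))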

    Interpretation of deep directional resistivity measurements acquired in high-angle and horizontal wells using 3-D inversion

    The interpretation of resistivity measurements acquired in high-angle and horizontal wells is a critical technical problem in formation evaluation. We develop an efficient parallel 3-D inversion method to estimate the spatial distribution of electrical resistivity in the neighbourhood of a well from deep directional electromagnetic induction measurements. The methodology places no restriction on the spatial distribution of the electrical resistivity around arbitrary well trajectories. The fast forward modelling of triaxial induction measurements performed with multiple transmitter-receiver configurations employs a parallel direct solver. The inversion uses a preconditioned gradient-based method whose accuracy is improved using the Wolfe conditions to estimate optimal step lengths at each iteration. The large transmitter-receiver offsets used in the latest generation of commercial directional resistivity tools improve the depth of investigation to over 30 m from the wellbore. Several challenging synthetic examples confirm the feasibility of full 3-D inversion-based interpretations at these distances, hence enabling the integration of resistivity measurements with seismic amplitude data to improve the forecast of petrophysical and fluid properties. Employing parallel direct solvers for the triaxial induction problems allows for large reductions in computational effort, thereby opening the possibility of inverting multiposition 3-D data in practical CPU times.
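
    As an illustration of the line-search ingredient only (a toy NumPy sketch, not the authors' parallel 3-D inversion code), the step length of a gradient-based update can be chosen to satisfy the weak Wolfe conditions by bisection:

        # Gradient descent whose step length satisfies the weak Wolfe
        # conditions: sufficient decrease plus a curvature condition.
        import numpy as np

        def wolfe_step(f, grad, x, p, c1=1e-4, c2=0.9, alpha=1.0, max_iter=50):
            fx, slope = f(x), grad(x) @ p     # slope < 0 for a descent direction
            lo, hi = 0.0, np.inf
            for _ in range(max_iter):
                xa = x + alpha * p
                if f(xa) > fx + c1 * alpha * slope:     # sufficient decrease fails
                    hi = alpha
                    alpha = 0.5 * (lo + hi)
                elif grad(xa) @ p < c2 * slope:         # curvature condition fails
                    lo = alpha
                    alpha = 2 * alpha if hi == np.inf else 0.5 * (lo + hi)
                else:
                    return alpha
            return alpha

        def invert(f, grad, x0, n_iter=100):
            """Plain (unpreconditioned) steepest descent with Wolfe step lengths."""
            x = np.asarray(x0, dtype=float)
            for _ in range(n_iter):
                p = -grad(x)
                x = x + wolfe_step(f, grad, x, p) * p
            return x

        # Toy quadratic misfit; the real inversion minimizes the mismatch
        # between simulated and measured triaxial induction responses.
        f = lambda m: 0.5 * np.sum((m - 3.0) ** 2)
        g = lambda m: m - 3.0
        print(invert(f, g, np.zeros(4)))      # converges to [3, 3, 3, 3]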

    Using Biotic Interaction Networks for Prediction in Biodiversity and Emerging Diseases

    Networks offer a powerful tool for understanding and visualizing inter-species ecological and evolutionary interactions. Previously considered examples, such as trophic networks, are representations of experimentally observed direct interactions. However, species interactions are so rich and complex that it is not feasible to directly observe more than a small fraction. In this paper, using data mining techniques, we show how potential interactions can be inferred from geographic data rather than by direct observation. An important application area for this methodology is that of emerging diseases, where often little is known about inter-species interactions, such as between vectors and reservoirs. Here, we show how biotic interaction networks that model statistical dependencies between species distributions, built from geographic data, can be used to infer and understand inter-species interactions. Furthermore, we show how such networks can be used to build prediction models, for example for predicting the most important reservoirs of a disease or the degree of disease risk associated with a geographical area. We illustrate the general methodology by considering an important emerging disease, Leishmaniasis. This data mining methodology allows the use of geographic data to construct inferential biotic interaction networks, which can then be used to build prediction models with a wide range of applications in ecology, biodiversity and emerging diseases.
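
    A minimal sketch of the inference step, assuming presence/absence data on a grid of cells (the binomial z-score below is a simple stand-in for the statistical dependency measure used in the paper): species become nodes, and an edge is added when two species co-occur significantly more (or less) often than expected under independence.

        # Build edges between species whose co-occurrence across grid cells
        # deviates strongly from the independence expectation.
        import numpy as np

        def cooccurrence_network(presence: np.ndarray, names, z_thresh=2.0):
            """presence: (n_species, n_cells) boolean presence/absence matrix."""
            n_species, n_cells = presence.shape
            edges = []
            for i in range(n_species):
                for j in range(i + 1, n_species):
                    observed = np.sum(presence[i] & presence[j])
                    p_joint = presence[i].mean() * presence[j].mean()
                    expected = n_cells * p_joint
                    sd = np.sqrt(n_cells * p_joint * (1 - p_joint))
                    if sd == 0:
                        continue
                    z = (observed - expected) / sd
                    if abs(z) >= z_thresh:       # keep strong dependencies only
                        edges.append((names[i], names[j], round(float(z), 2)))
            return edges

        # Toy example: a "reservoir" constructed to co-occur with a "vector"
        # should come out strongly linked to it.
        rng = np.random.default_rng(0)
        vector = rng.random(200) < 0.4
        reservoir = vector & (rng.random(200) < 0.8)
        unrelated = rng.random(200) < 0.3
        print(cooccurrence_network(np.array([vector, reservoir, unrelated]),
                                   ["vector", "reservoir", "unrelated"]))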

    Carbohydrate hydrogels with stabilized phage particles for bacterial biosensing: bacterium diffusion studies

    Bacteriophage particles have been reported as potentially useful in the development of diagnostic tools for pathogenic bacteria, as they specifically recognize and lyse bacterial isolates, thus confirming the presence of viable cells. One of the most representative microorganisms associated with health care services is the bacterium Pseudomonas aeruginosa, which alone is responsible for nearly 15% of all nosocomial infections. In this context, structural and functional stabilization of phage particles within biopolymeric hydrogels, aiming at producing cheap (chromogenic) bacterial biosensing devices, has been the goal of a previous research effort. For this, a detailed knowledge of the bacterial diffusion profile into the hydrogel core, where the phage particles lie, is of utmost importance. In the present research effort, the bacterial diffusion process into the biopolymeric hydrogel core was mathematically described and the theoretical simulations duly compared with experimental results, allowing determination of the effective diffusion coefficients of P. aeruginosa in the agar and calcium alginate hydrogels tested. Financial support to Victor M. Balcao, via an Invited Research Scientist fellowship (FAPESP Ref. No. 2011/51077-8) by Fundacao de Amparo a Pesquisa do Estado de Sao Paulo (FAPESP, Sao Paulo, Brazil), is hereby gratefully acknowledged.
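
    A minimal sketch of the mathematical description, assuming a 1-D slab geometry (a simplification of the hydrogel systems in the paper): Fick's second law, ∂C/∂t = D ∂²C/∂x², solved with an explicit finite-difference scheme; in practice the effective diffusion coefficient D is adjusted until simulated profiles match the experimentally observed penetration of P. aeruginosa.

        # Explicit finite-difference solution of Fick's second law in a slab
        # whose exposed surface is held at the bulk bacterial concentration.
        import numpy as np

        def diffuse_1d(D, length_m, t_end_s, nx=100, c_surface=1.0):
            dx = length_m / (nx - 1)
            dt = 0.4 * dx**2 / D          # within the explicit stability limit
            c = np.zeros(nx)
            c[0] = c_surface              # gel surface exposed to bacteria
            for _ in range(int(t_end_s / dt)):
                c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
                c[0] = c_surface          # fixed surface concentration
                c[-1] = c[-2]             # zero-flux (closed) far boundary
            return c

        # Example: relative concentration at mid-depth of a 2 mm gel after 6 h
        # for a trial effective D (m^2/s).
        profile = diffuse_1d(D=1e-10, length_m=2e-3, t_end_s=6 * 3600)
        print(f"relative concentration at mid-depth: {profile[50]:.3f}")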